
    Limits to the Development of Feed-Forward Structures in Large Recurrent Neuronal Networks

    Spike-timing-dependent plasticity (STDP) has traditionally been of great interest to theoreticians, as it seems to provide an answer to the question of how the brain can develop functional structure in response to repeated stimuli. However, despite this high level of interest, convincing demonstrations of this capacity in large, initially random networks have not been forthcoming. The demonstrations that do exist typically rely on constraining the problem artificially: employing additional pruning mechanisms or STDP rules that enhance symmetry breaking, simulating networks with low connectivity, which magnifies competition between synapses, or combinations of the above. In this paper, we first review modeling choices that carry particularly high risks of producing non-generalizable results in the context of STDP in recurrent networks. We then develop a theory for the development of feed-forward structure in random networks and conclude that an unstable fixed point in the dynamics prevents the stable propagation of structure in recurrent networks with weight-dependent STDP. We demonstrate that the key predictions of the theory hold in large-scale simulations. The theory provides insight into why such development does not take place in unconstrained systems and enables us to identify biologically motivated candidate adaptations to the balanced random network model that might enable it.
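
    To make "weight-dependent STDP" concrete, the sketch below implements one common multiplicative variant, in which the size of each update depends on the current weight. The rule, parameter names, and values are illustrative assumptions for exposition, not the specific model analyzed in the paper.

    ```cpp
    #include <algorithm>
    #include <cmath>

    // Parameters of a multiplicative weight-dependent STDP rule.
    // All values are illustrative assumptions, not the paper's model.
    struct StdpParams {
        double tau_plus  = 20.0;  // ms, time constant of potentiation window
        double tau_minus = 20.0;  // ms, time constant of depression window
        double lambda    = 0.01;  // learning rate
        double alpha     = 1.05;  // relative strength of depression
        double w_max     = 1.0;   // upper bound on the weight
    };

    // Update a weight for one pre/post spike pair.
    // dt = t_post - t_pre in ms: dt > 0 potentiates, dt < 0 depresses.
    // Potentiation scales with the distance to the upper bound (w_max - w),
    // depression scales with the weight itself, so updates shrink toward
    // the bounds; the resulting drift of the weight distribution is what a
    // fixed-point analysis of weight-dependent STDP examines.
    double stdp_update(double w, double dt, const StdpParams& p) {
        if (dt > 0.0)
            w += p.lambda * (p.w_max - w) * std::exp(-dt / p.tau_plus);
        else
            w -= p.lambda * p.alpha * w * std::exp(dt / p.tau_minus);
        return std::clamp(w, 0.0, p.w_max);
    }
    ```

    Under such a rule, potentiation shrinks and depression grows as a weight increases, so the weight dynamics possess fixed points whose stability determines whether structured pathways can emerge and persist.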

    Routing brain traffic through the von Neumann bottleneck: Efficient cache usage in spiking neural network simulation code on general purpose computers

    Simulation is a third pillar, next to experiment and theory, in the study of complex dynamic systems such as biological neural networks. Contemporary brain-scale networks correspond to directed graphs of a few million nodes, each with an in-degree and out-degree of several thousand edges, where nodes and edges correspond to the fundamental biological units, neurons and synapses, respectively. In a random graph, each node's edges are distributed across thousands of parallel processes. The activity in neuronal networks is also sparse: each neuron occasionally transmits a brief signal, called a spike, via its outgoing synapses to the corresponding target neurons. This spatial and temporal sparsity represents an inherent bottleneck for simulations on conventional computers: fundamentally irregular memory-access patterns cause poor cache utilization. Using an established neuronal network simulation code as a reference implementation, we investigate how common techniques to recover cache performance, such as software-induced prefetching and software pipelining, can benefit a real-world application. The algorithmic changes reduce simulation time by up to 50%. The study exemplifies that many-core systems tasked with an intrinsically parallel computational problem can overcome the von Neumann bottleneck of conventional computer architectures.
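
    As a rough illustration of software-induced prefetching for such irregular access patterns, the following sketch issues non-binding prefetch hints a fixed distance ahead of a gather-style delivery loop. It assumes the GCC/Clang builtin __builtin_prefetch; the data structures, names, and prefetch distance are hypothetical and not taken from the reference implementation.

    ```cpp
    #include <cstddef>
    #include <vector>

    // Per-target state touched on spike delivery; the layout is hypothetical.
    struct Neuron {
        double v_m = 0.0;  // membrane-related state updated by incoming spikes
    };

    // Gather-style delivery loop with software-induced prefetching.
    // target_ids is irregular, so the hardware prefetcher cannot predict it;
    // __builtin_prefetch (GCC/Clang) issues a non-binding hint for a target
    // a fixed number of iterations ahead. The distance of 8 is illustrative
    // and would need tuning per architecture.
    void deliver(std::vector<Neuron>& neurons,
                 const std::vector<std::size_t>& target_ids,
                 double weight) {
        constexpr std::size_t kDist = 8;
        const std::size_t n = target_ids.size();
        for (std::size_t i = 0; i < n; ++i) {
            if (i + kDist < n)
                __builtin_prefetch(&neurons[target_ids[i + kDist]],
                                   /*rw=*/1, /*locality=*/1);
            neurons[target_ids[i]].v_m += weight;  // irregular access
        }
    }
    ```

    The prefetch is only a hint: correctness is unaffected if the hint is ignored or the cache line is evicted before use, so the technique can be layered onto existing code without changing its semantics.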

    Routing Brain Traffic Through the Von Neumann Bottleneck: Parallel Sorting and Refactoring

    Generic simulation code for spiking neuronal networks spends the major part of its time in the phase where spikes have arrived at a compute node and need to be delivered to their target neurons. These spikes were emitted over the last interval between communication steps by source neurons distributed across many compute nodes and are inherently irregular and unsorted with respect to their targets. To find those targets, the spikes need to be dispatched to a three-dimensional data structure, with decisions on target thread and synapse type to be made on the way. With growing network size, a compute node receives spikes from an increasing number of different source neurons, until in the limit each synapse on the compute node has a unique source. Here, we show analytically how this sparsity emerges over the practically relevant range of network sizes, from a hundred thousand to a billion neurons. By profiling a production code we investigate opportunities for algorithmic changes to avoid indirections and branching. Every thread hosts an equal share of the neurons on a compute node. In the original algorithm, all threads search through all spikes to pick out the relevant ones; with increasing network size, the fraction of hits remains invariant but the absolute number of rejections grows. Our new alternative algorithm divides the spikes equally among the threads and immediately sorts them in parallel according to target thread and synapse type. After this, every thread delivers solely the section of spikes destined for its own neurons. Independent of the number of threads, each spike is looked at only twice. The new algorithm halves the number of instructions in spike delivery, which leads to a reduction of simulation time of up to 40%. Thus, spike delivery is a fully parallelizable process with a single synchronization point and thereby well suited for many-core systems. Our analysis indicates that further progress requires a reduction of the latency that the instructions experience in accessing memory. The study provides the foundation for the exploration of latency-hiding methods such as software pipelining and software-induced prefetching.
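
    The following sketch illustrates the two-phase scheme described above: each thread first sorts an equal share of the incoming spikes by target thread and synapse type, and after a single synchronization point every thread delivers only the spikes addressed to its own neurons. OpenMP, the data layout, and all names here are assumptions for illustration; the production code is organized differently.

    ```cpp
    #include <algorithm>
    #include <tuple>
    #include <vector>
    #include <omp.h>

    // Hypothetical spike record; the production code uses another layout.
    struct Spike {
        int target_thread;
        int synapse_type;
        int target_neuron;
        double weight;
    };

    // Application-specific delivery to a neuron hosted by the calling thread.
    inline void deliver_to_local(const Spike& /*s*/) { /* update local state */ }

    // Two-phase delivery: (1) each thread sorts its equal share of the spikes
    // by (target thread, synapse type); (2) after one synchronization point,
    // each thread scans every share but handles only the contiguous section
    // of spikes addressed to its own neurons. Each spike is touched twice.
    // Assumes shares.size() equals the number of OpenMP threads.
    void deliver_spikes(std::vector<std::vector<Spike>>& shares) {
        #pragma omp parallel
        {
            const int tid = omp_get_thread_num();
            std::sort(shares[tid].begin(), shares[tid].end(),
                      [](const Spike& a, const Spike& b) {
                          return std::tie(a.target_thread, a.synapse_type)
                               < std::tie(b.target_thread, b.synapse_type);
                      });
            #pragma omp barrier  // the single synchronization point
            for (const auto& share : shares) {
                auto first = std::lower_bound(
                    share.begin(), share.end(), tid,
                    [](const Spike& s, int t) { return s.target_thread < t; });
                for (auto it = first;
                     it != share.end() && it->target_thread == tid; ++it)
                    deliver_to_local(*it);
            }
        }
    }
    ```

    Because each share is sorted, the spikes for a given target thread form one contiguous run per share, so the delivery phase needs no further branching on the target thread.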

    Evaluating differently tutored groups in problem-based learning in a German dental curriculum: a mixed methods study

    Background: It is still unclear to what extent the PBL tutor affects learning in PBL sessions. This mixed-methods study (Parts 1 and 2) evaluated the effects of facilitative (f) versus non-facilitative (nf) tutoring roles on knowledge gain and group functioning in the field of endodontics. Methods: Part 1 was a quantitative assessment of tutor effectiveness within a prospective, experimental, single-blind, stratified, randomized, two-group intervention study. Participants attended PBL in the context of a hybrid curriculum. A validated questionnaire was used, and knowledge assessments were conducted before and after the intervention. External observers rated tutor performance. Part 2 was a qualitative assessment of tutor effectiveness and consisted of semi-structured expert interviews with tutors and focus group discussions with students. Results: Part 1: f tutors obtained significantly higher scores than nf tutors with respect to learning motivation and tutor effectiveness (p ≤ 0.05). nf tutoring resulted in a slightly larger knowledge gain (p = 0.08). External observers documented significantly higher activity among facilitative tutors than among non-facilitative tutors. Part 2: Tutors found the f role easier, although this led to a less autonomous working climate. The students rated f tutoring as positive in all cases. Conclusions: With respect to PBL-group performance, students felt that groups guided in a non-facilitative fashion exhibited a higher level of independence and autonomy, especially with increasing PBL experience. In addition, students reported that more preparation was necessary for sessions guided by a non-facilitative tutor. Tutors were able to modify their role and influence group processes in a controlled manner. The results are useful for future “Train-the-Teacher” sessions.

    Which comforting messages really work best? A different perspective on Lemieux and Tighe’s “receiver perspective”

    This article responds critically to a recent article by Lemieux and Tighe (Communication Research Reports, 21, 144–153, 2004) in which the authors conclude that recipients of comforting efforts prefer messages that exhibit a moderate rather than a high level of person centeredness. It is argued that an erroneous assumption made by Lemieux and Tighe about the status of “receiver perspective” research on the comforting process led to faulty interpretations of the data and unwarranted conclusions about recipient preferences regarding comforting messages. Alternative interpretations of Lemieux and Tighe's data are presented; these are guided by the extensive previous research that has assessed evaluations and outcomes of comforting messages.

    Phenomenological modeling of diverse and heterogeneous synaptic dynamics at natural density

    This chapter sheds light on the synaptic organization of the brain from the perspective of computational neuroscience. It provides an introductory overview of how to account for empirical data in mathematical models, implement them in software, and perform simulations reflecting experiments. This path is demonstrated with respect to four key aspects of synaptic signaling: the connectivity of brain networks, synaptic transmission, synaptic plasticity, and the heterogeneity across synapses. Each step and aspect of the modeling and simulation workflow comes with its own challenges and pitfalls, which are highlighted and addressed in detail.
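
    As an example of the kind of phenomenological synapse model the chapter covers, the sketch below implements event-driven short-term plasticity in the style of Tsodyks and Markram, with depression and optional facilitation. The particular update variant and all parameter values are illustrative assumptions; heterogeneity across synapses can be captured by drawing the parameters per synapse from distributions.

    ```cpp
    #include <cmath>

    // Event-driven short-term plasticity in the style of Tsodyks & Markram.
    // The update variant and all parameter values are illustrative.
    struct StpSynapse {
        double U       = 0.5;    // baseline utilization of synaptic resources
        double tau_rec = 800.0;  // ms, recovery from depression
        double tau_fac = 0.0;    // ms, facilitation time constant (0 = none)
        double x = 1.0;          // fraction of available resources
        double u = 0.5;          // current utilization

        // Called once per presynaptic spike; dt is the interval since the
        // previous spike in ms. Returns the efficacy that scales the
        // amplitude of the postsynaptic current for this spike.
        double on_spike(double dt) {
            x = 1.0 - (1.0 - x) * std::exp(-dt / tau_rec);  // resources recover
            if (tau_fac > 0.0) {
                u = U + (u - U) * std::exp(-dt / tau_fac);  // relax to baseline
                u += U * (1.0 - u);                         // facilitation jump
            } else {
                u = U;                                      // depression-only
            }
            const double efficacy = u * x;
            x -= efficacy;                                  // resources depleted
            return efficacy;
        }
    };
    ```

    In a network simulation, the returned efficacy would scale the synaptic weight applied when the spike is delivered to the target neuron.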

    Meta-analysis of gene expression profiles in breast cancer: toward a unified understanding of breast cancer subtyping and prognosis signatures

    INTRODUCTION: Breast cancer subtyping and prognosis have been studied extensively by gene expression profiling, resulting in disparate signatures with little overlap in their constituent genes. Although a previous study demonstrated a prognostic concordance among gene expression signatures, it was limited to only one dataset and did not fully elucidate how the different genes were related to one another, nor did it examine the contribution of well-known biological processes of breast cancer tumorigenesis to their prognostic performance. METHODS: To address the above issues and to further validate these initial findings, we performed the largest meta-analysis of publicly available breast cancer gene expression and clinical data, comprising 2,833 breast tumors. Gene coexpression modules of three key biological processes in breast cancer (namely, proliferation, estrogen receptor [ER], and HER2 signaling) were used to dissect the role of the constituent genes of nine prognostic signatures. RESULTS: Using a meta-analytical approach, we consolidated the signatures associated with ER signaling, ERBB2 amplification, and proliferation. The previously published expression-based nomenclature of breast cancer 'intrinsic' subtypes can be mapped to the three modules, namely, the ER-/HER2- (basal-like), the HER2+ (HER2-like), and the low- and high-proliferation ER+/HER2- subtypes (luminal A and B). We showed that all nine prognostic signatures exhibited a similar prognostic performance in the entire dataset; their prognostic abilities are due mostly to the detection of proliferation activity. Although ER- status (basal-like) and ERBB2+ expression status correspond to poor outcome, they seem to act through elevated expression of proliferation genes and thus contain only indirect information about prognosis. Clinical variables measuring the extent of tumor progression, such as tumor size and nodal status, still add independent prognostic information to proliferation genes. CONCLUSION: This meta-analysis unifies various results of previous gene expression studies in breast cancer. It reveals connections between traditional prognostic factors, expression-based subtyping, and prognostic signatures, highlighting the important role of proliferation in breast cancer prognosis.
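
    As a hypothetical illustration of how the activity of a coexpression module such as proliferation can be summarized per tumor, the sketch below computes a simple module score: the mean z-scored expression of the module's member genes. The function and its name are generic constructions for exposition, not the meta-analytical procedure of the paper.

    ```cpp
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Hypothetical per-sample module score: the mean z-scored expression of
    // a module's member genes (e.g., a proliferation module).
    // expression[g][s] holds the (log-scale) expression of gene g in sample s.
    std::vector<double> module_score(
            const std::vector<std::vector<double>>& expression,
            const std::vector<std::size_t>& module_genes) {
        const std::size_t n_samples = expression.front().size();
        std::vector<double> score(n_samples, 0.0);
        for (std::size_t g : module_genes) {
            const std::vector<double>& row = expression[g];
            double mean = 0.0;
            for (double v : row) mean += v;
            mean /= static_cast<double>(n_samples);
            double var = 0.0;
            for (double v : row) var += (v - mean) * (v - mean);
            const double sd = std::sqrt(var / static_cast<double>(n_samples - 1));
            for (std::size_t s = 0; s < n_samples; ++s)
                score[s] += (row[s] - mean) / sd;  // z-score of gene g in sample s
        }
        for (double& v : score)
            v /= static_cast<double>(module_genes.size());  // average over genes
        return score;
    }
    ```

    Correlating such per-sample scores with survival outcomes is one simple way to ask how much of a signature's prognostic power a single module accounts for.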